4 research outputs found

    Development and Validation of Deep Learning Transformer Models for Building a Comprehensive and Real-time Trauma Observatory

    OBJECTIVE To study the feasibility of setting up a national trauma observatory in France, we compared the performance of several automatic language processing methods on a multiclass classification task of unstructured clinical notes. METHODS A total of 69,110 free-text clinical notes related to visits to the emergency departments of the University Hospital of Bordeaux, France, between 2012 and 2019 were manually annotated; 22,481 of these notes were traumas. We trained 4 transformer models (deep learning models built around an attention mechanism) and compared them with term frequency–inverse document frequency (TF-IDF) combined with a support vector machine (SVM). RESULTS The transformer models consistently performed better than TF-IDF/SVM. Among the transformers, the GPTanam model, pretrained on a French corpus with an additional self-supervised learning step on 306,368 unlabeled clinical notes, showed the best performance, with a micro F1-score of 0.969. CONCLUSIONS The transformers proved efficient at multiclass classification of narrative medical data. Further improvements should focus on abbreviation expansion and multioutput multiclass classification.
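    The TF-IDF baseline mentioned above weights each term by its in-document frequency scaled down by how common it is across the corpus; the resulting vectors are then fed to an SVM. A minimal self-contained sketch of the weighting (toy two-note corpus and sklearn-style smoothed IDF are illustrative assumptions, not the authors' actual setup):

    ```python
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """Return one sparse dict {term: weight} per document.

        Uses smoothed IDF, idf(t) = ln((1 + n) / (1 + df(t))) + 1, so terms
        appearing in every document still receive a nonzero weight.
        """
        n = len(docs)
        tokenized = [d.lower().split() for d in docs]
        # Document frequency: in how many documents each term appears.
        df = Counter()
        for tokens in tokenized:
            df.update(set(tokens))
        vectors = []
        for tokens in tokenized:
            tf = Counter(tokens)
            total = len(tokens)
            vectors.append({
                t: (c / total) * (math.log((1 + n) / (1 + df[t])) + 1)
                for t, c in tf.items()
            })
        return vectors

    docs = [
        "patient fell from ladder fracture of the wrist",      # trauma-like note
        "patient reports chest pain and shortness of breath",  # non-trauma note
    ]
    vecs = tfidf_vectors(docs)
    # "fracture" appears in only one note, so it outweighs the shared "patient".
    ```

    Discriminative terms such as "fracture" end up with higher weights than words shared across notes, which is what makes the representation usable by a linear classifier such as an SVM.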

    De-identification of Emergency Medical Records in French: Survey and Comparison of State-of-the-Art Automated Systems

    In France, structured data from emergency room (ER) visits are aggregated at the national level to build a syndromic surveillance system for several health events. For visits motivated by a traumatic event, information on the causes is stored in free-text clinical notes. To exploit these data, an automated de-identification system guaranteeing the protection of privacy is required. In this study, we review available tools for de-identifying free-text clinical documents in French. A key point is how to overcome the resource barrier that hampers NLP applications in languages other than English. We compare rule-based, named entity recognition, new transformer-based deep learning, and hybrid systems, using, when required, a fine-tuning set of 30,000 unlabeled clinical notes. The evaluation is performed on a test set of 3,000 manually annotated notes. Hybrid systems, which combine capabilities in complementary tasks, show the best performance. This work is a first step toward a national surveillance system based on the exhaustive collection of ER visit reports for automated trauma monitoring.
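    The rule-based family of systems compared above replaces identifying spans (names, dates, phone numbers) with placeholder tags. A minimal sketch of that idea, with deliberately simplified patterns (the regexes, tags, and example note are hypothetical; production tools use much richer lexicons and context rules):

    ```python
    import re

    # Ordered (pattern, replacement-tag) rules; each substitutes a PHI category.
    PATTERNS = [
        (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "<DATE>"),              # dd/mm/yyyy dates
        (re.compile(r"\b0\d(?:[ .]?\d{2}){4}\b"), "<PHONE>"),          # French phone numbers
        (re.compile(r"\b(?:Dr|M|Mme)\.?\s+[A-ZÉÈ][a-zé]+"), "<NAME>"), # title + surname
    ]

    def deidentify(text: str) -> str:
        """Apply each redaction rule in turn and return the cleaned note."""
        for pattern, tag in PATTERNS:
            text = pattern.sub(tag, text)
        return text

    note = "Vu par Dr Martin le 03/05/2019, rappeler au 05 56 12 34 56."
    print(deidentify(note))
    # → Vu par <NAME> le <DATE>, rappeler au <PHONE>.
    ```

    Rule-based systems like this need no training data, which is one way around the resource barrier the survey discusses, but they miss identifiers not covered by a pattern; that gap is what the hybrid systems combining rules with learned named entity recognition address.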

    Automatic Language Classification of Hospital Emergency Department Data

    Models based on the Transformer architecture that include an unsupervised pretraining step with a predictive objective, such as GPT-2 (Generative Pretrained Transformer 2), have recently achieved remarkable success. We adapted and implemented a natural language processing (NLP) model to determine whether a free-text clinical note describes a trauma or not. We compared this approach, which requires only a small number of annotated samples, with a fully supervised approach. Our results (based on the AUC and the F1-score) show that a general-purpose model such as GPT-2 can be adapted into a powerful classifier of free-text notes in French using only a very small number of labeled samples.

    Deep Learning Transformer Models for Building a Comprehensive and Real-time Trauma Observatory: Development and Validation Study

    Background: Public health surveillance relies on the collection of data, often in near-real time. Recent advances in natural language processing make it possible to envisage an automated system for extracting information from electronic health records. Objective: To study the feasibility of setting up a national trauma observatory in France, we compared the performance of several automatic language processing methods in a multiclass classification task of unstructured clinical notes. Methods: A total of 69,110 free-text clinical notes related to visits to the emergency departments of the University Hospital of Bordeaux, France, between 2012 and 2019 were manually annotated. Among these clinical notes, 32.5% (22,481/69,110) were traumas. We trained 4 transformer models (deep learning models that encompass an attention mechanism) and compared them with term frequency–inverse document frequency associated with the support vector machine method. Results: The transformer models consistently performed better than term frequency–inverse document frequency and a support vector machine. Among the transformers, the GPTanam model, pretrained on a French corpus with an additional autosupervised learning step on 306,368 unlabeled clinical notes, showed the best performance with a micro F1-score of 0.969. Conclusions: The transformers proved efficient at the multiclass classification of narrative and medical data. Further steps for improvement should focus on the expansion of abbreviations and multioutput multiclass classification.
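    The micro F1-score reported above pools true positives, false positives, and false negatives across all classes before computing precision and recall, so frequent classes weigh more than rare ones. A minimal sketch with made-up labels for a hypothetical 3-class trauma-coding task (the class names and predictions are illustrative only):

    ```python
    def micro_f1(y_true, y_pred, classes):
        """Micro-averaged F1: pool TP/FP/FN over all classes, then compute F1."""
        tp = fp = fn = 0
        for c in classes:
            tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
            fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
            fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    y_true = ["fall", "burn", "fall", "road", "road"]
    y_pred = ["fall", "fall", "fall", "road", "burn"]
    print(round(micro_f1(y_true, y_pred, {"fall", "burn", "road"}), 3))  # → 0.6
    ```

    Note that when each note receives exactly one true and one predicted label, as in this single-label multiclass setting, every false positive for one class is a false negative for another, so micro F1 coincides with plain accuracy (here 3 of 5 correct).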